
    Matching Spec# Specifications

    In this project, we develop a tool that compares two Spec# programs (C# code with specification contracts) for signature matching. The tool automatically identifies whether the two specifications are similar and produces a new Spec# program that needs to be verified. Several levels of matching criteria judge how similar the two Spec# programs' specifications are. This work contributes to code reuse via specification matching: given a specification, we aim to match it to a similar specification and use that specification's implementation to generate an implementation of the original one. We present the specification-matching process in detail for Spec# programs, discuss how the method may be applied to other languages, and indicate future work in this direction. We match specifications following Amy Moormann Zaremski and Jeannette M. Wing's work on specification matching of software components [1], which proposes a lattice of possible specification-matching techniques. Examples include Exact Pre/Post Match, Plug-In Match, Plug-In Post Match, Weak Post Match, Exact Predicate Match, Generalized Match, and Specialized Match. We apply these definitions to Spec# programs, provide examples of verification matches, and illustrate the level of matching that can be achieved automatically within the Spec# tools.
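
    As a reference point, the two most frequently used matches from Zaremski and Wing's lattice can be written over a query specification $Q$ and a library specification $S$, each given as a precondition/postcondition pair. The rendering below is a standard textbook formulation, not notation taken from this project:

```latex
% Exact pre/post match: preconditions and postconditions are equivalent.
% Q = (pre_Q, post_Q) is the query, S = (pre_S, post_S) the library component.
\[
\mathrm{match}_{\mathrm{exact}}(S, Q) =
  (\mathit{pre}_Q \Leftrightarrow \mathit{pre}_S) \wedge
  (\mathit{post}_S \Leftrightarrow \mathit{post}_Q)
\]
% Plug-in match: S may assume less (weaker precondition) and guarantee
% more (stronger postcondition) than the query requires, so its
% implementation can be plugged in wherever Q is expected.
\[
\mathrm{match}_{\mathrm{plugin}}(S, Q) =
  (\mathit{pre}_Q \Rightarrow \mathit{pre}_S) \wedge
  (\mathit{post}_S \Rightarrow \mathit{post}_Q)
\]
```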

    Assignment Algorithms for Multi-Robot Multi-Target Tracking with Sufficient and Limited Sensing Capability

    We study the problem of assigning robots, together with their actions, to track targets. The objective is to optimize the robot team's tracking quality, defined as the reduction in the uncertainty of the targets' states. Specifically, we consider two assignment problems arising from different sensing capabilities of the robots. In the first assignment problem, a single robot is sufficient to track a target. For this setting, we present a greedy algorithm (Algorithm 1) that assigns a robot with its action to each target. We prove that the greedy algorithm has a 1/2 approximation bound and runs in polynomial time. Then, we study the second assignment problem, where two robots are necessary to track a target. We design another greedy algorithm (Algorithm 2) that assigns a pair of robots with their actions to each target. We prove that this greedy algorithm achieves a 1/3 approximation bound and also has a polynomial running time. Moreover, we illustrate the performance of the two greedy algorithms in the ROS-Gazebo environment, where the tracking patterns of one robot following one target (Algorithm 1) and of two robots following one target (Algorithm 2) are clearly observed. Further, we conduct extensive comparisons to demonstrate that the two greedy algorithms perform close to their optimal counterparts and much better than their respective (1/2 and 1/3) approximation bounds.
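
    To illustrate the single-robot case, here is a minimal greedy assignment sketch in Python. The gain function, robot/action/target names, and the one-robot-per-target constraint are assumptions made for illustration; the paper's actual Algorithm 1 and its objective are defined in the full text:

```python
import math

# Minimal sketch of a greedy robot-action-to-target assignment.
# Each (robot, action, target) triple has a gain: the estimated reduction
# in the target's state uncertainty if that robot executes that action.
# Greedily pick the best remaining triple, then retire the robot and target.

def greedy_assignment(robots, actions, targets, gain):
    """gain(robot, action, target) -> float (assumed given, e.g. from an
    uncertainty/covariance model). Returns {target: (robot, action)}."""
    triples = [(gain(r, a, t), r, a, t)
               for r in robots for a in actions for t in targets]
    triples.sort(reverse=True, key=lambda x: x[0])

    assigned, used_robots, used_targets = {}, set(), set()
    for g, r, a, t in triples:
        if r in used_robots or t in used_targets:
            continue
        assigned[t] = (r, a)
        used_robots.add(r)
        used_targets.add(t)
    return assigned

# Toy usage with a hypothetical distance-based gain:
if __name__ == "__main__":
    robot_pos = {"r1": (0, 0), "r2": (5, 5)}
    target_pos = {"t1": (1, 0), "t2": (5, 4)}
    def gain(r, a, t):
        dx = robot_pos[r][0] - target_pos[t][0]
        dy = robot_pos[r][1] - target_pos[t][1]
        return 1.0 / (1.0 + math.hypot(dx, dy) / a)  # closer + faster action = better
    print(greedy_assignment(["r1", "r2"], [1.0, 2.0], ["t1", "t2"], gain))
```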

    Robust Joint Image Reconstruction from Color and Monochrome Cameras

    Recent years have seen an explosion of the number of camera modules integrated into individual consumer mobile devices, including configurations that contain multiple different types of image sensors. One popular configuration is to combine an RGB camera for color imaging with a monochrome camera that has improved performance in low-light settings, as well as some sensitivity in the infrared. In this work we introduce a method to combine simultaneously captured images from such a two-camera stereo system to generate a high-quality, noise-reduced color image. To do so, pixel-to-pixel alignment has to be constructed between the two captured monochrome and color images, which, however, is prone to artifacts due to parallax. The joint image reconstruction is made robust by introducing a novel artifact-robust optimization formulation. We provide extensive experimental results based on the two-camera configuration of a commercially available cell phone.
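
    As a generic sketch of what an artifact-robust joint formulation can look like, one might write an energy of the following shape. This is an illustrative template under assumed notation, not the paper's actual formulation:

```latex
% Illustrative template of a robust joint-reconstruction energy (not the
% paper's actual formulation). x: recovered color image; c: noisy color
% input; m: monochrome input; W: the parallax-based warp into the color view;
% rho: a robust penalty (e.g. Huber) that downweights misaligned pixels.
\[
\min_{x}\;
  \underbrace{\lVert x - c \rVert_2^2}_{\text{color fidelity}}
  \;+\; \lambda \sum_{p} \rho\!\big( (\nabla x)(p) - (\nabla (W m))(p) \big)
\]
% The robust penalty keeps gradient guidance from the monochrome image
% where alignment is good, while limiting the damage where the warp fails.
```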

    Referring Expression Comprehension via Cross-Level Multi-Modal Fusion

    As an important and challenging problem in vision-language tasks, referring expression comprehension (REC) aims to localize the target object specified by a given referring expression. Recently, most state-of-the-art REC methods have focused mainly on multi-modal fusion while overlooking the inherent hierarchical information contained in visual and language encoders. Considering that REC requires visual and textual hierarchical information for accurate target localization, and that encoders inherently extract features in a hierarchical fashion, we propose to effectively utilize the rich hierarchical information contained in different layers of the visual and language encoders. To this end, we design a Cross-level Multi-modal Fusion (CMF) framework, which gradually integrates multi-layer visual and textual features through intra- and inter-modal fusion. Experimental results on the RefCOCO, RefCOCO+, RefCOCOg, and ReferItGame datasets demonstrate that the proposed framework achieves significant performance improvements over state-of-the-art methods.
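
    A minimal sketch of the cross-level fusion idea, assuming per-layer visual token features and a per-layer text vector; the module names, shapes, and weighting scheme are illustrative stand-ins, not the paper's CMF architecture:

```python
import numpy as np

# Illustrative cross-level fusion: at each encoder level, fuse the visual
# tokens with the text feature (inter-modal), then mix the fused results
# across levels (cross-level). Shapes and weighting are toy assumptions.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_level(visual, text):
    """visual: (N, d) tokens at one encoder level; text: (d,) sentence vector.
    Weight visual tokens by their similarity to the text (toy cross-attention)."""
    scores = softmax(visual @ text / np.sqrt(visual.shape[1]))  # (N,)
    return visual * scores[:, None] + text[None, :]             # (N, d)

def cross_level_fuse(visual_levels, text_levels):
    """Gradually accumulate fused features from shallow to deep levels."""
    fused = None
    for v, t in zip(visual_levels, text_levels):
        level = fuse_level(v, t)
        fused = level if fused is None else 0.5 * (fused + level)  # cross-level mix
    return fused

# Toy usage: 3 encoder levels, 16 visual tokens, feature dimension 32.
rng = np.random.default_rng(0)
visual_levels = [rng.normal(size=(16, 32)) for _ in range(3)]
text_levels = [rng.normal(size=32) for _ in range(3)]
print(cross_level_fuse(visual_levels, text_levels).shape)  # (16, 32)
```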

    DelicacyNet for nutritional evaluation of recipes

    In this paper, we are interested in how computers can better serve humans, for example by helping them control their nutrient intake. Specifically, a neural network model is used to help identify and analyze the content and proportion of nutrients in daily food intake, so as to help people autonomously choose and reasonably balance their diets. We build the desired system from four modules: the imagination module samples the environment, the encoder extracts latent features from the image, the decoder recovers the required feature vector from those latent features, and the semantic output module converts that vector into nutrition-table information. The model achieves very high accuracy on the Recipe1M+ and Food2K datasets.
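
    A minimal sketch of the four-module flow described above; every class name and the feature handling are hypothetical stand-ins, not DelicacyNet's actual code:

```python
# Illustrative four-stage pipeline mirroring the module description above.
# All classes are hypothetical stand-ins, not DelicacyNet's implementation.

class Imagination:           # samples an image of the dish from the environment
    def sample(self, env):
        return env["image"]

class Encoder:               # compresses the image into latent features
    def encode(self, image):
        return [p / 255.0 for p in image]

class Decoder:               # recovers the task-relevant feature vector
    def decode(self, latent):
        return latent[:4]

class SemanticOutput:        # maps features to nutrition-table entries
    NUTRIENTS = ["protein", "fat", "carbohydrate", "fiber"]
    def to_table(self, vec):
        return dict(zip(self.NUTRIENTS, vec))

def evaluate_recipe(env):
    image = Imagination().sample(env)
    latent = Encoder().encode(image)
    vec = Decoder().decode(latent)
    return SemanticOutput().to_table(vec)

print(evaluate_recipe({"image": [120, 30, 200, 45, 80]}))
```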

    Amortizing Rate-1 OT and Applications to PIR and PSI

    Recent new constructions of rate-1 OT [Döttling, Garg, Ishai, Malavolta, Mour, and Ostrovsky, CRYPTO 2019] have brought this primitive under the spotlight, and the techniques have led to new feasibility results for private information retrieval and homomorphic encryption for branching programs. The receiver communication of this construction consists of a quadratic (in the sender's input size) number of group elements for a single instance of rate-1 OT. Recently, [Garg, Hajiabadi, Ostrovsky, TCC 2020] improved the receiver communication to a linear number of group elements for a single string-OT. However, most applications of rate-1 OT require executing it multiple times, resulting in large communication costs for the receiver. In this work, we introduce a new technique for amortizing the cost of multiple rate-1 OTs. Specifically, based on standard pairing assumptions, we obtain a two-message rate-1 OT protocol for which the amortized cost per string-OT is asymptotically reduced to only four group elements. Our results lead to significant communication improvements in PSI and PIR, special cases of SFE for branching programs.
    - PIR: We obtain a rate-1 PIR scheme with client communication cost of $O(\lambda \cdot \log N)$ group elements for security parameter $\lambda$ and database size $N$. Notably, after a one-time setup (or one PIR instance), any following PIR instance only requires communication cost of $O(\log N)$ group elements.
    - PSI with unbalanced inputs: We apply our techniques to private set intersection with unbalanced set sizes (where the receiver has a smaller set) and achieve receiver communication of $O((m+\lambda)\log N)$ group elements, where $m, N$ are the sizes of the receiver and sender sets, respectively. Similarly, after a one-time setup (or one PSI instance), any following PSI instance only requires communication cost of $O(m \cdot \log N)$ group elements. All previous sublinear-communication non-FHE-based PSI protocols for the above unbalanced setting were also based on rate-1 OT, but incurred at least $O(\lambda^2 m \log N)$ group elements.
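
    To make the amortization gap concrete, here is a back-of-the-envelope comparison of the PSI receiver-communication bounds under assumed parameters (λ = 128, N = 2^20, m = 16), with all hidden constants set to 1, so the numbers indicate orders of magnitude only:

```python
import math

# Back-of-the-envelope comparison of PSI receiver communication, counting
# group elements with every hidden constant set to 1 (illustrative only).
lam, N, m = 128, 2**20, 16           # assumed: security param, sender set, receiver set
logN = math.log2(N)

prior = lam**2 * m * logN            # O(lambda^2 * m * log N): previous rate-1-OT PSI
this_setup = (m + lam) * logN        # O((m + lambda) * log N): first instance
this_repeat = m * logN               # O(m * log N): each instance after setup

print(f"prior work       ~ {prior:,.0f} group elements")
print(f"this work, first ~ {this_setup:,.0f} group elements")
print(f"this work, next  ~ {this_repeat:,.0f} group elements")
```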

    INT: Towards Infinite-frames 3D Detection with An Efficient Framework

    It is natural to construct a multi-frame rather than a single-frame 3D detector for a continuous-time stream. Although increasing the number of frames might improve performance, previous multi-frame studies only used very limited frames to build their systems due to the dramatically increased computational and memory cost. To address these issues, we propose a novel on-stream training and prediction framework that, in theory, can employ an infinite number of frames while keeping the same amount of computation as a single-frame detector. This infinite framework (INT), which can be used with most existing detectors, is demonstrated, for example, on the popular CenterPoint, with significant latency reductions and performance improvements. We also conducted extensive experiments on two large-scale datasets, nuScenes and the Waymo Open Dataset, to demonstrate the scheme's effectiveness and efficiency. By employing INT on CenterPoint, we obtain around 7% (Waymo) and 15% (nuScenes) performance boosts with only 2~4 ms latency overhead, and are currently SOTA on the Waymo 3D Detection leaderboard. (Comment: accepted by ECCV 2022.)
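
    The constant-compute idea can be sketched as a recursive memory update: each incoming frame is processed exactly once and folded into a fixed-size running state, so per-frame cost does not grow with the number of frames seen. The sketch below is illustrative, not INT's actual implementation:

```python
import numpy as np

# Illustrative streaming detector skeleton: per-frame cost stays constant
# because history is folded into a fixed-size memory instead of
# reprocessing past frames. A sketch of the idea, not INT's code.

class StreamingDetector:
    def __init__(self, dim=64, decay=0.9):
        self.memory = np.zeros(dim)   # fixed-size fused history
        self.decay = decay

    def extract(self, frame):
        # Stand-in for a single-frame backbone (e.g. a point-cloud encoder).
        return np.asarray(frame, dtype=float)

    def step(self, frame):
        feat = self.extract(frame)                      # one frame's work
        self.memory = self.decay * self.memory + feat   # recursive fusion
        return self.memory                              # detect from fused state

det = StreamingDetector(dim=4)
for t in range(1000):                                   # "infinite" stream
    fused = det.step(np.random.rand(4))                 # constant cost per frame
print(fused.shape)
```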

    FusionFormer: A Multi-sensory Fusion in Bird's-Eye-View and Temporal Consistent Transformer for 3D Object Detection

    Multi-sensor modal fusion has demonstrated strong advantages in 3D object detection tasks. However, existing methods that fuse multi-modal features through simple channel concatenation require transforming the features into bird's-eye-view space and may lose information along the Z-axis, which leads to inferior performance. To this end, we propose FusionFormer, an end-to-end multi-modal fusion framework that leverages transformers to fuse multi-modal features and obtain fused BEV features. Based on FusionFormer's flexible adaptability to the input modality representation, we propose a depth prediction branch that can be added to the framework to improve detection performance in camera-based detection tasks. In addition, we propose a plug-and-play temporal fusion module based on transformers that can fuse historical-frame BEV features for more stable and reliable detection results. We evaluate our method on the nuScenes dataset and achieve 72.6% mAP and 75.1% NDS for 3D object detection tasks, outperforming state-of-the-art methods.
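
    A toy sketch of transformer-style fusion into BEV queries via cross-attention, illustrating why this can avoid the lossy early concatenation; all shapes and names are assumptions, not FusionFormer's actual architecture:

```python
import numpy as np

# Toy transformer-style fusion of camera and LiDAR tokens into BEV queries
# via cross-attention. Shapes and names are illustrative assumptions.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys_values):
    """queries: (Q, d) BEV queries; keys_values: (N, d) modality tokens."""
    d = queries.shape[1]
    attn = softmax(queries @ keys_values.T / np.sqrt(d))  # (Q, N)
    return attn @ keys_values                              # (Q, d)

def fuse_bev(bev_queries, camera_tokens, lidar_tokens):
    # Each modality keeps its own representation; the BEV queries attend
    # to both in turn, instead of a lossy early channel concatenation.
    x = bev_queries + cross_attend(bev_queries, camera_tokens)
    x = x + cross_attend(x, lidar_tokens)
    return x

rng = np.random.default_rng(1)
bev = fuse_bev(rng.normal(size=(100, 32)),   # 10x10 BEV grid of queries
               rng.normal(size=(50, 32)),    # camera tokens
               rng.normal(size=(200, 32)))   # LiDAR tokens
print(bev.shape)  # (100, 32)
```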

    Transgenic plants of Petunia hybrida harboring the CYP2E1 gene efficiently remove benzene and toluene pollutants and improve resistance to formaldehyde

    The CYP2E1 protein belongs to the P450 enzyme family and plays an important role in the metabolism of small-molecule organic pollutants. In this study, we generated CYP2E1 transgenic Petunia plants using Agrobacterium rhizogenes K599. PCR analysis confirmed that the regenerated plants contained the CYP2E1 transgene and the rolB gene of the Ri plasmid. Southern blotting revealed the presence of multiple copies of CYP2E1 in the genome of the transgenic plants. Fluorescent quantitative PCR revealed exogenous CYP2E1 expression at various levels in the CYP2E1 transgenic plants, whereas no such expression was detected in either GUS transgenic plants or wild types. The absorption of benzene and toluene by the transgenic plants was analyzed by quantitative gas chromatography. Transgenic plants with high CYP2E1 expression showed a significant increase in the capacity to absorb environmental benzene and toluene compared to control GUS transgenic and wild-type plants. Furthermore, these plants also showed markedly improved resistance to formaldehyde. This study, besides being the first to reveal that the CYP2E1 gene enhances plant resistance to formaldehyde, also furnishes a new method for reducing pollutants such as benzene, toluene, and formaldehyde by using transgenic flowering horticultural plants.